Generalized Nonparametric Regression via Penalized Likelihood

Authors

  • Dennis D. Cox
  • Finbarr O'Sullivan
Abstract

We consider the asymptotic analysis of penalized likelihood-type estimators for generalized nonparametric regression problems in which the target parameter is a vector-valued function defined in terms of the conditional distribution of a response given a set of covariates. A variety of examples, including ones related to generalized linear models and robust smoothing, are covered by the theory. Upper bounds on rates of convergence for penalized likelihood-type estimators are obtained by approximating the estimators in terms of one-step Taylor series expansions.

AMS 1980 subject classifications: Primary 62G05; secondary 62J05, 41A35, 41A25, 47A53, 45L10, 45M05.
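
In schematic terms (a generic formulation, not the paper's exact notation), a penalized likelihood-type estimator of this kind minimizes a negative log-likelihood plus a roughness penalty over a function space,

\[
\hat{\theta}_{n,\lambda} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta} \left\{ -\frac{1}{n} \sum_{i=1}^{n} \log f\bigl(Y_i \mid \theta(X_i)\bigr) \;+\; \lambda\, J(\theta) \right\},
\]

where f is the conditional density of the response (or a pseudo-likelihood in the robust-smoothing case), J is a roughness penalty such as \(\int \|\theta^{(m)}(x)\|^2\,dx\), and \(\lambda > 0\) is a smoothing parameter. The rate results are obtained by linearizing the first-order optimality condition of such a criterion around the true parameter, so that the estimator is approximated by a one-step Taylor expansion whose bias and variability can be bounded.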


Related articles

Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. More recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to be computationally superior. This paper explores...
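
As a minimal illustration of the coordinate descent idea (a sketch for a lasso-penalized least-squares objective, not code from either cited paper), each coefficient is updated in turn by soft-thresholding its partial residual correlation:

import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.

    A simplified sketch: X is an (n, p) array with nonzero columns, columns
    are not standardized, and a fixed number of sweeps replaces a
    convergence check.
    """
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                       # current residual
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature X_j'X_j / n
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]         # residual with coordinate j removed
            rho = X[:, j] @ r / n       # X_j' r_{-j} / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]         # restore residual with updated b[j]
    return b

In practice one would also standardize the columns, use warm starts along a decreasing sequence of penalty values, and stop when the coefficient changes fall below a tolerance.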

Penalized Likelihood-type Estimators for Generalized Nonparametric Regression

We consider the asymptotic analysis of penalized likelihood type estimators for generalized non-parametric regression problems in which the target parameter is a vector valued function defined in terms of the conditional distribution of a response given a set of covariates. A variety of examples including ones related to generalized linear models and robust smoothing are covered by the theory. ...

Penalized likelihood regression for generalized linear models with nonquadratic penalties

One popular method for fitting a regression function is regularization: minimize an objective function which enforces a roughness penalty in addition to coherence with the data. This is the case when formulating penalized likelihood regression for exponential families. Most smoothing methods employ quadratic penalties, leading to linear estimates, and are in general incapable of recovering disc...
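
Schematically (with illustrative notation, and an absolute-value penalty chosen here only as an example of a nonquadratic penalty, not necessarily the one used in the cited paper), penalized likelihood regression for a one-parameter exponential family takes the form

\[
\min_{\eta} \; -\frac{1}{n} \sum_{i=1}^{n} \bigl\{ y_i\, \eta(x_i) - b\bigl(\eta(x_i)\bigr) \bigr\} \;+\; \lambda \int \bigl| \eta^{(m)}(x) \bigr| \, dx,
\]

where \(\eta\) is the regression function on the natural-parameter (link) scale and \(b(\cdot)\) is the cumulant function of the family. Quadratic penalties such as \(\int (\eta^{(m)})^2\) lead to linear, spline-type smoothers that have difficulty capturing discontinuities; absolute-value-type penalties are one nonquadratic alternative.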

Automatic Generalized Nonparametric Regression via Maximum Likelihood

A relatively recent development in nonparametric regression is the representation of spline-based smoothers as mixed model fits. In particular, generalized nonparametric regression (e.g. smoothing with a binary response) corresponds to fitting a generalized linear mixed model. Automation, or data-driven smoothing parameter selection, can be achieved via (restricted) maximum likelihood estimation ...
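
A standard way to see this correspondence (a generic sketch with a truncated-line spline basis, not necessarily the cited paper's formulation) writes the penalized spline as a generalized linear mixed model:

\[
g\bigl(E[y_i \mid u]\bigr) \;=\; \beta_0 + \beta_1 x_i + \sum_{k=1}^{K} u_k\, (x_i - \kappa_k)_+, \qquad u \sim N(0, \sigma_u^2 I),
\]

with link g, fixed effects \(\beta\), and random spline coefficients u attached to knots \(\kappa_1, \dots, \kappa_K\). The amount of smoothing is governed by the variance components (in the Gaussian case the effective smoothing parameter is \(\sigma_\varepsilon^2 / \sigma_u^2\)), so estimating the variances by maximum likelihood or REML selects the smoothing parameter automatically.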

Automatic Smoothing and Variable Selection via Regularization

This thesis focuses on developing computational methods and the general theory of automatic smoothing and variable selection via regularization. Regularization is a commonly used technique for obtaining stable solutions to ill-posed problems such as nonparametric regression and classification. In recent years, methods of regularization have also been successfully introduced to address a cla...
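
As one concrete example of data-driven regularization (a minimal sketch for ridge-penalized least squares, using notation and a helper not taken from the thesis), the penalty level can be chosen by generalized cross-validation:

import numpy as np

def gcv_ridge(X, y, lambdas):
    """Choose a ridge penalty by generalized cross-validation (GCV).

    For each candidate lam, computes b = (X'X + n*lam*I)^{-1} X'y and scores
    GCV(lam) = (RSS/n) / (1 - df/n)^2, where df is the trace of the hat
    matrix. A simplified sketch: no intercept, no column standardization.
    """
    n, p = X.shape
    best_score, best_lam, best_b = np.inf, None, None
    for lam in lambdas:
        A = X.T @ X + n * lam * np.eye(p)
        b = np.linalg.solve(A, X.T @ y)
        df = np.trace(X @ np.linalg.solve(A, X.T))   # effective degrees of freedom
        rss = np.sum((y - X @ b) ** 2)
        score = (rss / n) / (1.0 - df / n) ** 2
        if score < best_score:
            best_score, best_lam, best_b = score, lam, b
    return best_lam, best_b

The same idea carries over to smoothing splines and penalized likelihood fits, where GCV, cross-validation, or (restricted) maximum likelihood are the usual ways of automating the choice of the smoothing parameter.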


Journal title:

Volume / Issue:

Pages:

Publication date: 2007